
    Generalised Dice overlap as a deep learning loss function for highly unbalanced segmentations

    Deep learning has proved in recent years to be a powerful tool for image analysis and is now widely used to segment both 2D and 3D medical images. Deep-learning segmentation frameworks rely not only on the choice of network architecture but also on the choice of loss function. When the segmentation process targets rare observations, a severe class imbalance is likely to occur between candidate labels, thus resulting in sub-optimal performance. To mitigate this issue, strategies such as the weighted cross-entropy function, the sensitivity function, or the Dice loss function have been proposed. In this work, we investigate the behavior of these loss functions and their sensitivity to learning-rate tuning in the presence of different rates of label imbalance across 2D and 3D segmentation tasks. We also propose to use the class re-balancing properties of the Generalized Dice overlap, a known metric for segmentation assessment, as a robust and accurate deep-learning loss function for unbalanced tasks.
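
    As a concrete illustration of the class re-balancing idea, the following is a minimal PyTorch sketch of a generalized Dice loss with inverse-square class weights, written from the description in the abstract; the tensor layout, smoothing constant, and function name are assumptions rather than the authors' code.

    ```python
    import torch

    def generalized_dice_loss(probs: torch.Tensor,
                              one_hot_target: torch.Tensor,
                              eps: float = 1e-6) -> torch.Tensor:
        """probs and one_hot_target have shape (batch, classes, *spatial);
        probs are assumed to be softmax-normalised already."""
        dims = tuple(range(2, probs.dim()))                   # sum over spatial dims
        intersect = (probs * one_hot_target).sum(dim=dims)    # (batch, classes)
        cardinality = (probs + one_hot_target).sum(dim=dims)
        # Inverse-square class weights up-weight rare labels, re-balancing classes.
        weights = 1.0 / (one_hot_target.sum(dim=dims) ** 2 + eps)
        gdl = 1.0 - 2.0 * (weights * intersect).sum(dim=1) / (
            (weights * cardinality).sum(dim=1) + eps
        )
        return gdl.mean()
    ```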

    A Heteroscedastic Uncertainty Model for Decoupling Sources of MRI Image Quality

    Quality control (QC) of medical images is essential to ensure that downstream analyses such as segmentation can be performed successfully. Currently, QC is predominantly performed visually, at significant time and operator cost. We aim to automate the process by formulating a probabilistic network that estimates uncertainty through a heteroscedastic noise model, hence providing a proxy measure of task-specific image quality that is learnt directly from the data. By augmenting the training data with different types of simulated k-space artefacts, we propose a novel cascading CNN architecture based on a student-teacher framework to decouple sources of uncertainty related to different k-space augmentations in an entirely self-supervised manner. This enables us to predict separate uncertainty quantities for the different types of data degradation. While the uncertainty measures reflect the presence and severity of image artefacts, the network also provides segmentation predictions given the quality of the data. We show that models trained with simulated artefacts provide informative measures of uncertainty on real-world images, and we validate our uncertainty predictions on problematic images identified by human raters.
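
    As background for the heteroscedastic noise model mentioned above, here is a minimal sketch in the standard Kendall-and-Gal style, where the network predicts a per-voxel log-variance alongside its output. It illustrates the general principle only, not the paper's cascading student-teacher architecture; all names are assumptions.

    ```python
    import torch

    def heteroscedastic_loss(pred: torch.Tensor,
                             log_var: torch.Tensor,
                             target: torch.Tensor) -> torch.Tensor:
        """pred, log_var and target share the same shape; log_var is a learnt,
        input-dependent (heteroscedastic) log-variance."""
        precision = torch.exp(-log_var)
        # Large predicted variance attenuates the residual term but is penalised
        # by the log-variance term, so uncertainty is learnt from the data.
        return (0.5 * precision * (pred - target) ** 2 + 0.5 * log_var).mean()
    ```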

    Where is VALDO? VAscular Lesions Detection and segmentatiOn challenge at MICCAI 2021

    Imaging markers of cerebral small vessel disease provide valuable information on brain health, but their manual assessment is time-consuming and hampered by substantial intra- and inter-rater variability. Automated rating may benefit biomedical research, as well as clinical assessment, but the diagnostic reliability of existing algorithms is unknown. Here, we present the results of the VAscular Lesions DetectiOn and segmentatiOn (Where is VALDO?) challenge that was run as a satellite event at the international conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) 2021. This challenge aimed to promote the development of methods for automated detection and segmentation of small and sparse imaging markers of cerebral small vessel disease, namely enlarged perivascular spaces (EPVS) (Task 1), cerebral microbleeds (Task 2), and lacunes of presumed vascular origin (Task 3), while leveraging weak and noisy labels. Overall, 12 teams participated in the challenge, proposing solutions for one or more tasks (4 for Task 1 - EPVS, 9 for Task 2 - Microbleeds, and 6 for Task 3 - Lacunes). Multi-cohort data were used in both training and evaluation. Results showed a large variability in performance both across teams and across tasks, with promising results notably for Task 1 - EPVS and Task 2 - Microbleeds and not yet practically useful results for Task 3 - Lacunes. They also highlighted a performance inconsistency across cases that may deter use at an individual level, while the methods still prove useful at a population level.
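
    For readers unfamiliar with how detection of small, sparse lesions is typically scored, here is one common lesion-wise F1 computation based on connected components; this is an illustrative sketch and not necessarily the exact metric used in the VALDO challenge.

    ```python
    import numpy as np
    from scipy import ndimage

    def lesion_detection_f1(pred_mask: np.ndarray, gt_mask: np.ndarray) -> float:
        """Count a ground-truth lesion as detected if any predicted voxel overlaps
        its connected component; count a predicted component that touches no
        ground-truth lesion as a false positive."""
        gt_labels, n_gt = ndimage.label(gt_mask)
        pred_labels, n_pred = ndimage.label(pred_mask)
        tp = sum(1 for i in range(1, n_gt + 1) if pred_mask[gt_labels == i].any())
        fp = sum(1 for j in range(1, n_pred + 1) if not gt_mask[pred_labels == j].any())
        fn = n_gt - tp
        denom = 2 * tp + fp + fn
        return 2 * tp / denom if denom > 0 else 1.0
    ```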

    Automatic C-Plane Detection in Pelvic Floor Transperineal Volumetric Ultrasound

    Transperineal volumetric ultrasound (US) imaging has become routine practice for diagnosing pelvic floor disease (PFD). To this end, clinical guidelines stipulate that measurements be made in an anatomically defined 2D plane within the 3D volume, the so-called C-plane. This task is currently performed manually in clinical practice, which is labour-intensive and requires expert knowledge of pelvic floor anatomy, as no computer-aided C-plane detection method exists. To automate this process, we propose a novel, guideline-driven approach for automatic detection of the C-plane. The method uses a convolutional neural network (CNN) to identify the extreme coordinates of the symphysis pubis and levator ani muscle (which define the C-plane) directly via landmark regression. The C-plane is then identified in a post-processing step. When evaluated on 100 US volumes, our best performing method (multi-task regression with UNet) achieved a mean error of 6.05 mm and 4.81° and took 20 s. Two experts blindly evaluated the quality of the automatically detected planes and of the manually defined (gold standard) C-plane in terms of their clinical diagnostic quality. We show that the proposed method performs comparably to the manual definition. The automatic method reduces the average time to detect the C-plane by 100 s and reduces the need for high-level expertise in PFD US assessment.
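
    As an illustration of the post-processing step described above (turning regressed landmark coordinates into a plane), here is a minimal least-squares plane fit; the fitting method and function name are assumptions, not the authors' exact procedure.

    ```python
    import numpy as np

    def fit_plane(landmarks: np.ndarray):
        """landmarks: (N, 3) array of regressed 3D landmark coordinates, N >= 3.
        Returns a point on the fitted plane and its unit normal."""
        centroid = landmarks.mean(axis=0)
        # The right-singular vector with the smallest singular value of the
        # centred points is the normal of the best-fitting plane.
        _, _, vh = np.linalg.svd(landmarks - centroid)
        normal = vh[-1]
        return centroid, normal / np.linalg.norm(normal)
    ```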

    ICAM: Interpretable Classification via Disentangled Representations and Feature Attribution Mapping

    Feature attribution (FA), or the assignment of class-relevance to different locations in an image, is important for many classification problems but is particularly crucial within the neuroscience domain, where accurate mechanistic models of behaviours, or disease, require knowledge of all features discriminative of a trait. At the same time, predicting class relevance from brain images is challenging as phenotypes are typically heterogeneous, and changes occur against a background of significant natural variation. Here, we present a novel framework for creating class-specific FA maps through image-to-image translation. We propose the use of a VAE-GAN to explicitly disentangle class relevance from background features for improved interpretability properties, which results in meaningful FA maps. We validate our method on 2D and 3D brain image datasets of dementia (ADNI dataset), ageing (UK Biobank), and (simulated) lesion detection. We show that FA maps generated by our method outperform baseline FA methods when validated against ground truth. More significantly, our approach is the first to use latent space sampling to support exploration of phenotype variation. Our code will be available online at https://github.com/CherBass/ICAM.
    Comment: Submitted to NeurIPS 2020: Neural Information Processing Systems. Keywords: interpretable, classification, feature attribution, domain translation, variational autoencoder, generative adversarial network, neuroimaging.
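
    The core idea of deriving an FA map from image-to-image translation can be sketched as follows; `encode`, `decode`, and `target_attr` are hypothetical placeholders standing in for the disentangled VAE-GAN components described above, not the ICAM code.

    ```python
    import torch

    def attribution_map(x: torch.Tensor, encode, decode,
                        target_attr: torch.Tensor) -> torch.Tensor:
        """Translate an input to the target class and take the voxel-wise
        difference as a class-relevance (feature attribution) map."""
        content = encode(x)                           # class-irrelevant background code
        x_translated = decode(content, target_attr)   # re-render with target-class attribute
        return x_translated - x                       # where the image had to change
    ```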

    Hierarchical brain parcellation with uncertainty

    Many atlases used for brain parcellation are hierarchically organised, progressively dividing the brain into smaller sub-regions. However, state-of-the-art parcellation methods tend to ignore this structure and treat labels as if they are 'flat'. We introduce a hierarchically-aware brain parcellation method that works by predicting the decisions at each branch in the label tree. We further show how this method can be used to model uncertainty separately for every branch in this label tree. Our method exceeds the performance of flat uncertainty methods, whilst also providing decomposed uncertainty estimates that enable us to obtain self-consistent parcellations and uncertainty maps at any level of the label hierarchy. We demonstrate a simple way in which these decision-specific uncertainty maps may be used to provide uncertainty-thresholded tissue maps at any level of the label tree.
    Comment: To be published in the MICCAI 2020 workshop: Uncertainty for Safe Utilization of Machine Learning in Medical Imaging.
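
    A minimal sketch of the branch-wise formulation described above: the probability of a leaf label is the product of the predicted probabilities of the decisions along its path in the label tree. The tree encoding and variable names are illustrative assumptions, not the authors' implementation.

    ```python
    import torch
    from typing import Dict, List, Tuple

    def leaf_probability(branch_probs: Dict[str, torch.Tensor],
                         path: List[Tuple[str, int]]) -> torch.Tensor:
        """branch_probs maps each internal node to a tensor of per-voxel child
        probabilities with shape (children, *spatial); path lists the
        (node, child_index) decisions from the root down to a leaf label."""
        prob = None
        for node, child in path:
            p = branch_probs[node][child]
            prob = p if prob is None else prob * p   # chain the branch decisions
        return prob
    ```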

    APOE ε4 status is associated with white matter hyperintensities volume accumulation rate independent of AD diagnosis.

    To assess the relationship between carriage of the APOE ε4 allele and the evolution of white matter hyperintensity (WMH) volume, we longitudinally studied 339 subjects from the Alzheimer's Disease Neuroimaging Initiative cohort with diagnoses ranging from normal controls to probable Alzheimer's disease (AD). A purpose-built longitudinal automatic method was used to segment WMH using constraints derived from an atlas-based model selection applied to a time-averaged image. Linear mixed models were used to evaluate the differences in rate of change across diagnostic and genetic groups. After adjustment for covariates (age, sex, and total intracranial volume), homozygous APOE ε4ε4 subjects had a significantly higher rate of WMH accumulation (22.5% per year, 95% CI [14.4, 31.2], for a standardized population having typical values of the covariates) compared with heterozygous (ε4ε3) subjects (10.0% per year [6.7, 13.4]) and homozygous ε3ε3 subjects (6.6% per year [4.1, 9.3]). Rates of accumulation increased with diagnostic severity: for the standardized population, controls accumulated 5.8% per year (95% CI [2.2, 9.6]), early mild cognitive impairment 6.6% per year [3.9, 9.4], late mild cognitive impairment 12.5% per year [8.2, 17.0], and AD subjects 14.7% per year [6.0, 24.0]. Following adjustment for APOE status, these differences were no longer statistically significant, suggesting that APOE ε4 genotype, rather than AD diagnosis, is the major driver of WMH volume accumulation.
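
    For readers unfamiliar with this type of analysis, below is a sketch of a comparable longitudinal model in Python: a linear mixed model of log-transformed WMH volume with subject-level random effects. The file name, column names, and exact covariate set are assumptions, not the study's specification.

    ```python
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical long-format table: one row per subject per visit.
    df = pd.read_csv("wmh_longitudinal.csv")

    # Random intercept and slope per subject; the time-by-genotype interaction
    # captures differences in WMH accumulation rate across APOE groups.
    model = smf.mixedlm("log_wmh ~ years_from_baseline * apoe_group + age + sex + tiv",
                        data=df,
                        groups=df["subject_id"],
                        re_formula="~years_from_baseline")
    result = model.fit()
    print(result.summary())
    ```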

    NiftyNet: a deep-learning platform for medical imaging

    Medical image analysis and computer-assisted intervention problems are increasingly being addressed with deep-learning-based solutions. Established deep-learning platforms are flexible but do not provide specific functionality for medical image analysis, and adapting them for this application requires substantial implementation effort. Thus, there has been substantial duplication of effort and incompatible infrastructure developed across many research groups. This work presents the open-source NiftyNet platform for deep learning in medical imaging. The ambition of NiftyNet is to accelerate and simplify the development of these solutions, and to provide a common mechanism for disseminating research outputs for the community to use, adapt and build upon. NiftyNet provides a modular deep-learning pipeline for a range of medical imaging applications including segmentation, regression, image generation and representation learning. Components of the NiftyNet pipeline, including data loading, data augmentation, network architectures, loss functions and evaluation metrics, are tailored to, and take advantage of, the idiosyncrasies of medical image analysis and computer-assisted intervention. NiftyNet is built on TensorFlow and supports TensorBoard visualization of 2D and 3D images and computational graphs by default. We present three illustrative medical image analysis applications built using NiftyNet: (1) segmentation of multiple abdominal organs from computed tomography; (2) image regression to predict computed tomography attenuation maps from brain magnetic resonance images; and (3) generation of simulated ultrasound images for specified anatomical poses. NiftyNet enables researchers to rapidly develop and distribute deep-learning solutions for segmentation, regression, image generation and representation learning, or to extend the platform to new applications.
    Comment: Wenqi Li and Eli Gibson contributed equally to this work. M. Jorge Cardoso and Tom Vercauteren contributed equally to this work. 26 pages, 6 figures. Update includes additional applications, an updated author list, and formatting for journal submission.
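
    To illustrate the modular, configuration-driven design described above without reproducing NiftyNet's actual API, here is a minimal registry-based sketch in which pipeline components are selected by name from a configuration; all names are assumptions.

    ```python
    from typing import Callable, Dict

    # Registries of interchangeable pipeline components, keyed by name.
    REGISTRIES: Dict[str, Dict[str, Callable]] = {
        "network": {},       # e.g. "unet", "highres3dnet"
        "loss": {},          # e.g. "dice", "cross_entropy"
        "augmentation": {},  # e.g. "flip", "elastic"
    }

    def register(kind: str, name: str):
        """Decorator that adds a component constructor to the chosen registry."""
        def wrap(fn: Callable) -> Callable:
            REGISTRIES[kind][name] = fn
            return fn
        return wrap

    def build_pipeline(config: Dict[str, str]) -> Dict[str, Callable]:
        """Assemble a pipeline from a config such as
        {"network": "unet", "loss": "dice", "augmentation": "flip"}."""
        return {kind: REGISTRIES[kind][name] for kind, name in config.items()}
    ```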

    Assessment of Microvascular Disease in Heart and Brain by MRI: Application in Heart Failure with Preserved Ejection Fraction and Cerebral Small Vessel Disease

    The objective of this review is to investigate the commonalities of microvascular (small vessel) disease in heart failure with preserved ejection fraction (HFpEF) and cerebral small vessel disease (CSVD). Furthermore, the review aims to evaluate the current magnetic resonance imaging (MRI) diagnostic techniques for both conditions. By comparing the two conditions, this review seeks to identify potential opportunities to improve the understanding of both HFpEF and CSVD.